Automating PostgreSQL backups to Digital Ocean Spaces with Kamal

Here’s a short tutorial on how I set up automated PostgreSQL backups to a DO Spaces bucket with Kamal.

PostgreSQL backups

If you are like me, you've found yourself deploying PostgreSQL databases to Digital Ocean droplets with the new Kamal deployment tool. However, getting the database up and running is only half the job. We need backups.

Since we are on Digital Ocean, we might as well utilize Digital Ocean Spaces, an S3-compatible object storage service. That also means we can use most of the usual S3 tooling to do the job.

In my case, I depend on the eeshugerman/postgres-backup-s3 Docker image, which does the heavy lifting using pg_dump under the hood.
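
To give you an idea of what that means, here is my rough approximation of what the image does on each scheduled run (not its actual script; check the image's repository for the real thing):

# Rough sketch of a single backup run (approximation, not the image's real script)
PGPASSWORD="$POSTGRES_PASSWORD" pg_dump --format=custom \
  -h "$POSTGRES_HOST" -U "$POSTGRES_USER" "$POSTGRES_DATABASE" > db.dump

# With PASSPHRASE set, the dump gets encrypted before upload
# (symmetric gpg encryption, if I remember the image correctly)

# The dump is uploaded under S3_PREFIX; the real object name includes a timestamp
aws s3 cp db.dump "s3://$S3_BUCKET/$S3_PREFIX/backup-<timestamp>.dump" \
  --endpoint-url "$S3_ENDPOINT"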

Digital Ocean Spaces

Before diving into the Kamal setup, we need a bucket. You can create one under Spaces Object Storage in the control panel. Choose a preferred region and skip the CDN option.

I name the bucket [SERVICE]-backups.

Once the bucket is created, you'll also need an S3_ACCESS_KEY_ID and S3_SECRET_ACCESS_KEY pair. You can generate one under API -> Spaces Keys in the control panel.
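
You can verify the key pair against the bucket right away with any S3 client. A quick sketch with the AWS CLI, assuming you have it installed locally (tubeandchill-backups and fra1 are my bucket and region from the configuration below):

$ export AWS_ACCESS_KEY_ID=...        # the new Spaces access key
$ export AWS_SECRET_ACCESS_KEY=...    # the new Spaces secret key
$ export AWS_DEFAULT_REGION=us-east-1 # ignored by Spaces, but the CLI wants a region
$ aws s3 ls s3://tubeandchill-backups --endpoint-url https://fra1.digitaloceanspaces.com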

Kamal configuration

With the bucket ready, you can add postgres-backups as another accessory and provide the required configuration:

# config/deploy.yml
accessories:
  # ...
  postgres-backups:
    image: eeshugerman/postgres-backup-s3:15
    roles:
      - web
    env:
      clear:
        SCHEDULE: "@daily"
        BACKUP_KEEP_DAYS: 14
        S3_REGION: "unused"
        S3_BUCKET: "tubeandchill-backups"
        S3_ENDPOINT: https://tubeandchill-backups.fra1.digitaloceanspaces.com
        S3_PREFIX: postgres
        POSTGRES_HOST: "tubeandchill-postgres"
        POSTGRES_USER: "tubeandchill"
        POSTGRES_DATABASE: "tubeandchill_production"
      secret:
        - POSTGRES_PASSWORD
        - S3_ACCESS_KEY_ID
        - S3_SECRET_ACCESS_KEY
        - PASSPHRASE
    options:
      network: "private"

The first important thing is the version of the image. I used the :15 tag to match PostgreSQL 15 and assigned the service to run alongside the web role, since I deploy to a single VM. You could point it to a specific host: instead, for example.

Then comes the environment configuration:

  • SCHEDULE and BACKUP_KEEP_DAYS control when backups run and how long they are kept. I chose daily backups kept for 14 days.

  • S3_* variables determine the usual S3 storage settings. Note that in the case of DO Spaces, S3_REGION is not used directly; the region is already part of S3_ENDPOINT.

  • POSTGRES_* variables match the ones we are already using for PostgreSQL itself.

  • PASSPHRASE is an optional password to set if you want your backups to be encrypted (see below for a quick way to generate one).
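
Any random string works as a passphrase; one quick way to generate one:

$ openssl rand -base64 32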

If you depend on a Docker network for internal container communication, do not forget to add it here as well. This way I am able to reference POSTGRES_HOST as [SERVICE]-[ACCESSORY] instead of an IP address.
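
One caveat: if the private network from the config doesn't exist on the host yet, Docker will refuse to start the container, so create it first (the host address below is a placeholder):

$ ssh root@your-server
$ docker network create private
$ docker network inspect private   # later, check that both containers show up here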

Creating automated backups

The next step is to boot the new accessory, but make sure to push the new environment variables first:

$ kamal env push
$ kamal accessory boot postgres-backups
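
kamal env push takes the secret values from your local environment; with the default setup that's a .env file in the project root (values below are placeholders):

# .env
POSTGRES_PASSWORD=...
S3_ACCESS_KEY_ID=...
S3_SECRET_ACCESS_KEY=...
PASSPHRASE=...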

If all is well, create a test backup and check in the DO control panel that a new backup appeared in the bucket:

$ kamal accessory exec -i --reuse postgres-backups "sh backup.sh"
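
If you prefer the terminal over the control panel, the same check works with the S3 tooling from before (same credentials exported, my example bucket and region):

$ aws s3 ls s3://tubeandchill-backups/postgres/ --endpoint-url https://fra1.digitaloceanspaces.com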

If you make a mistake, remove the accessory and start over:

$ kamal accessory remove postgres-backups
...
# Fix the configuration
...
$ kamal env push
$ kamal accessory boot postgres-backups
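
When something is off, the accessory's logs and container status usually point to the misconfiguration quickly. These subcommands were available in the Kamal version I used (check kamal accessory --help for yours):

$ kamal accessory logs postgres-backups
$ kamal accessory details postgres-backups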

Restoring the database

To restore a particular backup, run:

$ kamal accessory exec postgres-backups "sh restore.sh 2024-02-06T11:11:11"

Note that the format of the timestamp is %Y-%m-%dT%H:%M:%S.
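
If you need to look up the available timestamps or keep an offline copy of a dump, the same S3 tooling works here as well; the object name below is a placeholder, so list the postgres/ prefix as shown earlier to see the real names:

$ aws s3 cp s3://tubeandchill-backups/postgres/<backup-file> . --endpoint-url https://fra1.digitaloceanspaces.com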

And that’s pretty much it! If you need a little bit more, check out my Kamal Handbook, the first-ever book on Kamal.
